The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered the participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
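The patch-based strategy that most respondents reported for oversized samples can be illustrated with a minimal sketch; `predict_patch` below is a hypothetical stand-in for any trained model, not code from a surveyed team.

```python
# Minimal sketch of patch-based processing: a volume too large for memory is tiled,
# each patch is predicted separately, and the results are stitched back together.
import numpy as np

def predict_patch(patch: np.ndarray) -> np.ndarray:
    """Placeholder per-patch model; a real pipeline would call a trained network."""
    return (patch > patch.mean()).astype(np.float32)

def patch_based_inference(volume: np.ndarray, patch_size=(64, 64, 64)) -> np.ndarray:
    """Tile the volume, predict each patch, and reassemble the full-size output."""
    out = np.zeros_like(volume, dtype=np.float32)
    pz, py, px = patch_size
    for z in range(0, volume.shape[0], pz):
        for y in range(0, volume.shape[1], py):
            for x in range(0, volume.shape[2], px):
                patch = volume[z:z+pz, y:y+py, x:x+px]
                out[z:z+pz, y:y+py, x:x+px] = predict_patch(patch)
    return out

volume = np.random.rand(128, 128, 128).astype(np.float32)
segmentation = patch_based_inference(volume)
```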
End-to-end autonomous driving provides a feasible way to automatically maximize overall driving system performance by directly mapping the raw pixels from a front-facing camera to control signals. Recent advanced methods construct a latent world model to map the high-dimensional observations into a compact latent space. However, the latent states embedded by the world models proposed in previous works may contain a large amount of task-irrelevant information, resulting in low sampling efficiency and poor robustness to input perturbations. Meanwhile, the training data distribution is usually unbalanced, and it is hard for the learned policy to cope with the corner cases encountered during driving. To solve the above challenges, we present a semantic masked recurrent world model (SEM2), which introduces a latent filter to extract key task-relevant features and reconstruct a semantic mask via the filtered features, and which is trained with a multi-source data sampler that aggregates common data and multiple corner-case data in a single batch to balance the data distribution. Extensive experiments on CARLA show that our method outperforms the state-of-the-art approaches in terms of sample efficiency and robustness to input perturbations.
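The multi-source sampler can be sketched as follows; the buffer layout, the fifty-fifty split, and the function name are assumptions made for illustration, not the authors' released code.

```python
# Sketch: each training batch mixes ordinary driving data with several corner-case
# buffers in fixed proportions so that rare events are not drowned out.
import random

def sample_balanced_batch(common_buffer, corner_buffers, batch_size=32, corner_fraction=0.5):
    """Draw a batch where roughly `corner_fraction` of samples come evenly from corner-case buffers."""
    n_corner = int(batch_size * corner_fraction)
    n_common = batch_size - n_corner
    batch = random.choices(common_buffer, k=n_common)
    per_buffer = max(1, n_corner // max(1, len(corner_buffers)))
    for buf in corner_buffers:
        batch.extend(random.choices(buf, k=per_buffer))
    random.shuffle(batch)
    return batch[:batch_size]

# toy usage with dummy transitions
common = [("obs", "act")] * 1000
corners = [[("corner_obs_a", "act")] * 50, [("corner_obs_b", "act")] * 50]
batch = sample_balanced_batch(common, corners)
```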
Selecting an appropriate parameter set for a designed controller is crucial to the final performance but usually requires a tedious and careful tuning process, which implies a strong need for automatic tuning methods. However, among existing methods, derivative-free ones suffer from poor scalability or low efficiency, while gradient-based ones may be unavailable due to possibly non-differentiable controller structures. To resolve these issues, we tackle the controller tuning problem with a novel derivative-free reinforcement learning (RL) framework, which performs timestep-wise perturbation in parameter space during experience collection and integrates derivative-free policy updates into an advanced actor-critic RL architecture to achieve high versatility and efficiency. To demonstrate the efficacy of the framework, we conduct numerical experiments on two concrete examples from autonomous driving, namely adaptive cruise control with a PID controller and trajectory tracking with an MPC controller. Experimental results show that the proposed method outperforms popular baselines and highlight its strong potential for controller tuning.
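A minimal sketch of the derivative-free idea, assuming a toy evaluation function in place of a real closed-loop rollout: controller parameters are perturbed, the resulting returns are compared, and an update direction is estimated without any gradient through the controller.

```python
# Illustrative sketch only: derivative-free tuning of PID gains by perturbing the
# parameter vector, scoring each perturbation, and moving toward better-scoring ones.
import numpy as np

def evaluate_controller(gains: np.ndarray) -> float:
    """Placeholder return; a real setup would simulate tracking error under these gains."""
    target = np.array([1.0, 0.1, 0.5])        # assumed "good" gains for the toy objective
    return -float(np.sum((gains - target) ** 2))

def derivative_free_tune(gains, iterations=200, sigma=0.05, lr=0.2, n_samples=8):
    """Estimate an ascent direction from random perturbations (ES-style), no gradients needed."""
    for _ in range(iterations):
        noise = np.random.randn(n_samples, gains.size) * sigma
        returns = np.array([evaluate_controller(gains + eps) for eps in noise])
        advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
        gains = gains + lr * (advantages[:, None] * noise).mean(axis=0) / sigma
    return gains

print(derivative_free_tune(np.zeros(3)))
```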
Model-agnostic meta-learning (MAML) is one of the most successful meta-learning techniques. It uses gradient descent to learn commonalities between various tasks, enabling the model to learn a meta-initialization of its own parameters to quickly adapt to new tasks with a small amount of labeled training data. A key challenge of few-shot learning is task uncertainty. Although a strong prior can be obtained from meta-learning over a large number of tasks, a precise model for a new task cannot be guaranteed because the training dataset is usually too small. In this study, first, in the process of choosing initialization parameters, a new method is proposed for task-specific learners to adaptively learn to select initialization parameters that minimize the loss on new tasks. Then, we propose two improvements to the meta-loss part: Method 1 generates weights by comparing meta-loss differences to improve the accuracy when there are few classes, and Method 2 introduces the homoscedastic uncertainty of each task to weigh the multiple losses on top of the original gradient descent, as a way to enhance the generalization ability to novel classes while ensuring the improvement in accuracy. Compared with previous gradient-based meta-learning methods, our model achieves better performance on regression tasks and few-shot classification and improves the robustness of the model to the learning rate and to the query sets in the meta-test set.
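The gradient-based meta-learning loop that these improvements build on can be summarized in a toy first-order sketch; the 1-D regression tasks, step sizes, and first-order simplification are assumptions made for brevity, not the paper's exact procedure.

```python
# Toy first-order MAML-style loop for 1-D linear regression tasks y = a*x: each task
# adapts the shared initialization on its support set, and the meta-update follows the
# adapted parameters' gradient on the query set.
import numpy as np

def task_grad(w, x, y):
    """Gradient of the mean squared error of the model y_hat = w*x."""
    return float(np.mean(2 * (w * x - y) * x))

rng = np.random.default_rng(0)
meta_w, inner_lr, meta_lr = 0.0, 0.05, 0.01
for _ in range(500):
    a = rng.uniform(-2, 2)                        # sample a task (slope)
    x_s, x_q = rng.normal(size=10), rng.normal(size=10)
    y_s, y_q = a * x_s, a * x_q
    w = meta_w
    for _ in range(3):                            # inner-loop adaptation on the support set
        w -= inner_lr * task_grad(w, x_s, y_s)
    meta_w -= meta_lr * task_grad(w, x_q, y_q)    # first-order meta-update on the query set
print(meta_w)
```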
Recent evolution-based zeroth-order optimization methods and policy-gradient-based first-order methods are two promising alternatives for solving reinforcement learning (RL) problems. The former work with arbitrary policies, rely on state-dependent and temporally extended exploration, and possess robustness properties, but suffer from high sample complexity, while the latter are more sample efficient but are restricted to differentiable policies, and the learned policies are less robust. To address these issues, we propose a novel zeroth-order actor-critic algorithm (ZOAC), which unifies these two methods into an on-policy actor-critic architecture to preserve the advantages of both. ZOAC alternates, in each iteration, between rollout collection with timestep-wise perturbation in parameter space, first-order policy evaluation (PEV), and zeroth-order policy improvement (PIM). We evaluate our method extensively on a wide range of challenging continuous control benchmarks using different types of policies, where ZOAC outperforms both zeroth-order and first-order baseline algorithms.
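A rough sketch, under toy assumptions, of the split ZOAC describes: a baseline (standing in for the first-order-trained critic) is updated by regression, while the actor parameters receive a zeroth-order update estimated from perturbed-parameter rollouts; `rollout_return` is hypothetical.

```python
# Sketch of combining a first-order "evaluation" step with zeroth-order policy improvement.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(theta):
    """Hypothetical scalar return of a rollout with perturbed policy parameters theta."""
    return -float(np.sum(theta ** 2)) + rng.normal(scale=0.1)

theta = rng.normal(size=4)          # actor parameters (updated with zeroth-order estimates)
baseline = 0.0                      # stand-in for the learned critic / value baseline
sigma, actor_lr, critic_lr = 0.1, 0.3, 0.1

for _ in range(300):
    noise = rng.normal(size=(16, theta.size)) * sigma
    returns = np.array([rollout_return(theta + eps) for eps in noise])
    baseline += critic_lr * (returns.mean() - baseline)                     # first-order evaluation step
    advantages = returns - baseline                                         # baseline reduces variance
    theta += actor_lr * (advantages[:, None] * noise).mean(axis=0) / sigma  # zeroth-order improvement
print(theta)
```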
In the trial-and-error mechanism of reinforcement learning (RL), a notorious contradiction arises when we expect to learn a safe policy: how can a safe policy be learned without sufficient data or a prior model of the dangerous regions? Existing methods mostly use posterior penalties for dangerous actions, which means the agent is not penalized until it has experienced the danger. This fact prevents the agent from learning a zero-violation policy even after convergence; otherwise, it would receive no penalty and lose its knowledge about danger. In this paper, we propose the safe set actor-critic (SSAC) algorithm, which confines the policy update using a safety-oriented energy function, or safety index. The safety index is designed to increase rapidly for potentially dangerous actions, which enables us to locate the safe set in the action space, i.e., the control safe set. Therefore, we can identify dangerous actions before taking them and further obtain a zero-constraint-violation policy after convergence. We claim that the energy function can be learned in a model-free manner, similar to learning a value function. By using the energy function transition as the constraint objective, we formulate a constrained RL problem. We prove that our Lagrangian-based solution ensures that the learned policy converges to the constrained optimum under certain assumptions. The proposed algorithm is evaluated in complex simulation environments and in a hardware-in-the-loop (HIL) experiment with a real controller from an autonomous vehicle. Experimental results show that the converged policies in all environments achieve zero constraint violation and performance comparable to model-based baselines.
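The role of the safety index can be illustrated with a small sketch; the distance-based energy `phi`, the single-integrator `dynamics`, and the candidate fallback set are assumptions chosen for this toy example, not the paper's learned components.

```python
# Sketch of a safety-index filter: phi is high near danger, and an action is only allowed
# if the predicted next-state energy dissipates enough, which is the kind of constraint
# SSAC imposes on the policy update.
import numpy as np

def phi(state: np.ndarray) -> float:
    """Assumed safety index: positive (high energy) when closer than 1 m to an obstacle at the origin."""
    return 1.0 - float(np.linalg.norm(state))

def dynamics(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Toy single-integrator model used only for this illustration."""
    return state + 0.1 * action

def safe_action(state, proposed, candidates, eta=0.01):
    """Keep the proposed action if the predicted energy dissipates enough; else pick the safest fallback."""
    def next_phi(a):
        return phi(dynamics(state, a))
    limit = phi(state) - eta if phi(state) > 0 else 0.0   # dissipation is only required near danger
    if next_phi(proposed) <= limit:
        return proposed
    return min(candidates, key=next_phi)

state = np.array([0.8, 0.0])                               # already inside the unsafe margin
candidates = [np.array([dx, dy], dtype=float) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(safe_action(state, np.array([-1.0, 0.0]), candidates))  # action toward the obstacle is replaced
```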
There are good arguments to support the claim that feature representations eventually transition from general to specific through deep neural networks (DNNs), but this transition remains relatively underexplored. In this work, we take a small step towards understanding the transition of feature representations. We first characterize the transition by analyzing the class separation in intermediate layers, and then model the process of class separation as community evolution in dynamic graphs. We then introduce modularity, a common metric in graph theory, to quantify the evolution of communities. We find that modularity tends to rise as the layers go deeper, but descends or reaches a plateau at particular layers. Through asymptotic analysis, we show that modularity provides a quantitative account of the transition of feature representations. With this insight into feature representations, we show that modularity can also be used to identify and locate redundant layers in DNNs, which provides theoretical guidance for layer pruning. Based on this inspiring finding, we propose a layer pruning method based on modularity. Further experiments show that our method can prune redundant layers with minimal impact on performance. The code is available at https://github.com/yaolu-zjut/dynamic-graphs-construction.
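The modularity probe can be sketched as follows, assuming a k-nearest-neighbour graph over one layer's features with the ground-truth classes as communities; the graph-construction details are illustrative choices, not necessarily the ones used in the repository above.

```python
# Sketch: score how well the ground-truth classes form communities in a k-NN graph
# built from one layer's features; layers where this score stops increasing would be
# the redundancy candidates for pruning.
import numpy as np
import networkx as nx
from networkx.algorithms.community import modularity

def layer_modularity(features: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Build a k-nearest-neighbour graph over the features and compute class-community modularity."""
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    graph = nx.Graph()
    graph.add_nodes_from(range(n))
    for i in range(n):
        for j in np.argsort(dists[i])[1:k + 1]:      # skip index 0 (the point itself)
            graph.add_edge(i, int(j))
    communities = [set(int(i) for i in np.flatnonzero(labels == c)) for c in np.unique(labels)]
    return modularity(graph, communities)

# toy usage: three loose clusters standing in for one layer's representations
feats = np.random.randn(60, 8) + 3.0 * np.repeat(np.arange(3), 20)[:, None]
labels = np.repeat(np.arange(3), 20)
print(layer_modularity(feats, labels))
```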
Safety is a major consideration when using reinforcement learning (RL) to control complex dynamical systems, where safety certificates can provide provable safety guarantees. A valid safety certificate is an energy function indicating that safe states have low energy, together with a corresponding safe control policy that keeps the energy function dissipating. The safety certificate and the safe control policy are closely related to each other, and both are challenging to synthesize. Therefore, existing learning-based studies treat one of them as prior knowledge in order to learn the other, which limits their applicability to systems with general unknown dynamics. This paper proposes a novel approach that simultaneously synthesizes the energy-function-based safety certificate and learns the safe control policy with constrained reinforcement learning (CRL). We do not rely on prior knowledge of either a model-based controller or a perfect safety certificate. In particular, we formulate a loss function that optimizes the safety certificate parameters by minimizing the occurrence of energy increases. By adding this optimization procedure as an outer loop to the Lagrangian-based CRL, we jointly update the policy and the safety certificate parameters, and we prove that they converge to their respective local optima: the optimal safe policy and a valid safety certificate. We evaluate our algorithm on multiple safety-critical benchmark environments. The results show that the algorithm learns provably safe policies with no constraint violation. The validity, or feasibility, of the synthesized safety certificates is also verified numerically.
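The outer-loop certificate update can be sketched with an assumed linear parameterisation of the energy function; the feature map, hinge margin, and random data below are placeholders, not the paper's implementation.

```python
# Sketch of tuning certificate parameters w so that the energy phi_w does not increase
# along collected transitions, via a hinge loss on the energy increase.
import numpy as np

def features(states: np.ndarray) -> np.ndarray:
    """Assumed feature map for a linear energy function phi_w(s) = features(s) @ w."""
    return np.concatenate([states, states ** 2], axis=-1)

def certificate_loss_grad(w, states, next_states, margin=0.05):
    """Hinge loss on phi(s') - phi(s) + margin, and its gradient with respect to w."""
    f, f_next = features(states), features(next_states)
    increase = (f_next - f) @ w + margin
    active = (increase > 0).astype(float)
    loss = float(np.mean(np.maximum(increase, 0.0)))
    grad = (active[:, None] * (f_next - f)).mean(axis=0)
    return loss, grad

# one hypothetical outer-loop step, interleaved in practice with the Lagrangian policy update
w = np.zeros(4)
states = np.random.randn(64, 2)
next_states = states + 0.1 * np.random.randn(64, 2)
loss, grad = certificate_loss_grad(w, states, next_states)
w -= 0.1 * grad
```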
Intersections are among the most complex and accident-prone urban scenarios in autonomous driving, where making safe and computationally efficient decisions is non-trivial. Current research mainly focuses on simplified traffic conditions while ignoring the existence of mixed traffic flows, i.e., vehicles, cyclists, and pedestrians. On urban roads, the different participants lead to highly dynamic and complex interactions, which makes it difficult to learn an intelligent policy. This paper develops a dynamic permutation state representation within the integrated decision and control (IDC) framework to handle signalized intersections with mixed traffic flows. Specifically, the representation introduces an encoding function and a summation operator to construct the driving state from environment observations, capable of dealing with different types and variable numbers of traffic participants. A constrained optimal control problem is formulated, in which the objective involves tracking performance, and the constraints for the different participants and the traffic signal are designed separately to ensure safety. We solve this problem by offline optimization of the encoding function, the value function, and the policy function, where the encoding function yields a reasonable state representation that then serves as the input to the policy and value functions. An off-policy training scheme is designed to reuse observations from the driving environment, and backpropagation through time is utilized to optimize the policy function and the encoding function jointly. Verification results show that the dynamic permutation state representation enhances the driving performance of IDC by a large margin, including comfort, decision compliance, and safety. The trained driving policy achieves efficient and smooth passage through complex intersections while guaranteeing driving intelligence and safety.
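The encoding-plus-summation construction can be sketched directly; the feature dimensions and the toy two-layer encoder below are assumptions, but the fixed-size, permutation-invariant state it produces is the property the representation relies on.

```python
# Sketch: every surrounding participant is embedded by the same encoding function and the
# embeddings are summed, so the driving state has a fixed size regardless of how many
# vehicles, cyclists, or pedestrians are present, and is invariant to their ordering.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(6, 32)), rng.normal(size=(32, 16))   # shared encoder weights (assumed sizes)

def encode_participant(x: np.ndarray) -> np.ndarray:
    """Shared two-layer encoder applied to one participant's feature vector (position, velocity, type...)."""
    return np.tanh(np.tanh(x @ W1) @ W2)

def driving_state(ego, participants):
    """Fixed-size, permutation-invariant state: ego features plus the summed participant encodings."""
    summed = np.sum([encode_participant(p) for p in participants], axis=0) if participants else np.zeros(16)
    return np.concatenate([ego, summed])

state = driving_state(rng.normal(size=4), [rng.normal(size=6) for _ in range(7)])
print(state.shape)   # fixed length even as the number of participants changes
```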
Zero-sum Markov games (MGs) have been an efficient framework for multi-agent systems and robust control, wherein a minimax problem is constructed to solve for the equilibrium policies. At present, this formulation is well studied under tabular settings, wherein the maximum operator is primarily solved exactly to calculate the worst-case value function. However, it is non-trivial to extend such methods to handle complex tasks, as finding the maximum over large-scale action spaces is usually cumbersome. In this paper, we propose the smoothing policy iteration (SPI) algorithm to solve zero-sum MGs approximately, where the maximum operator is replaced by the weighted LogSumExp (WLSE) function to obtain nearly optimal equilibrium policies. Specifically, the adversarial policy serves as the weight function, enabling efficient sampling over action spaces. We also prove the convergence of SPI and analyze its approximation error in the $\infty$-norm based on the contraction mapping theorem. Besides, we propose a model-based algorithm called Smooth adversarial Actor-critic (SaAC) by extending SPI with function approximation. The target value related to the WLSE function is evaluated from sampled trajectories, and a mean square error is then constructed to optimize the value function, while gradient-ascent-descent methods are adopted to optimize the protagonist and adversarial policies jointly. In addition, we incorporate the reparameterization technique in model-based gradient back-propagation to prevent the gradient from vanishing due to sampling from the stochastic policies. We verify our algorithm in both tabular and function approximation settings. Results show that SPI approximates the worst-case value function with high accuracy and that SaAC stabilizes the training process and improves adversarial robustness by a large margin.
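The WLSE substitution for the maximum operator can be written down in a few lines; the temperature value and the normalisation of the adversarial weights below are illustrative assumptions.

```python
# Sketch of the weighted LogSumExp (WLSE) smoothing of the inner maximisation, with the
# adversary's policy supplying the weights; as the temperature shrinks, this smooth
# operator approaches the exact max used in tabular zero-sum policy iteration.
import numpy as np

def wlse(q_values: np.ndarray, weights: np.ndarray, temperature: float = 0.1) -> float:
    """Smooth, sample-friendly surrogate for max_a q(a), weighted by the adversarial policy."""
    weights = weights / weights.sum()
    return temperature * np.log(np.sum(weights * np.exp(q_values / temperature)))

q = np.array([1.0, 2.0, 3.0])
pi_adv = np.array([0.2, 0.3, 0.5])
print(wlse(q, pi_adv, temperature=0.05))   # close to max(q) = 3.0 for a small temperature
```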